arm movement


Examining the legibility of humanoid robot arm movements in a pointing task

Lúčny, Andrej, Antonj, Matilde, Mazzola, Carlo, Hornáčková, Hana, Farić, Ana, Malinovská, Kristína, Vavrecka, Michal, Farkaš, Igor

arXiv.org Artificial Intelligence

Human-robot interaction requires robots whose actions are legible, allowing humans to interpret, predict, and feel safe around them. This study investigates the legibility of humanoid robot arm movements in a pointing task, aiming to understand how humans predict robot intentions from truncated movements and bodily cues. We designed an experiment using the NICO humanoid robot, where participants observed its arm movements towards targets on a touchscreen. Robot cues varied across conditions: gaze, pointing, and pointing with congruent or incongruent gaze. Arm trajectories were stopped at 60% or 80% of their full length, and participants predicted the final target. We tested the multimodal superiority and ocular primacy hypotheses, both of which were supported by the experiment.


Image-driven Robot Drawing with Rapid Lognormal Movements

Berio, Daniel, Clivaz, Guillaume, Stroh, Michael, Deussen, Oliver, Plamondon, Réjean, Calinon, Sylvain, Leymarie, Frederic Fol

arXiv.org Artificial Intelligence

Large image generation and vision models, combined with differentiable rendering technologies, have become powerful tools for generating paths that can be drawn or painted by a robot. However, these tools often overlook the intrinsic physicality of the human drawing/writing act, which is usually executed with skillful hand/arm gestures. Taking this into account is important for the visual aesthetics of the results and for the development of closer and more intuitive artist-robot collaboration scenarios. We present a method that bridges this gap by enabling gradient-based optimization of natural human-like motions guided by cost functions defined in image space. To this end, we use the sigma-lognormal model of human hand/arm movements, with an adaptation that enables its use in conjunction with a differentiable vector graphics (DiffVG) renderer. We demonstrate how this pipeline can be used to generate feasible trajectories for a robot by combining image-driven objectives with a minimum-time smoothing criterion. We showcase applications in the generation and robotic reproduction of synthetic graffiti, as well as in image abstraction.
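The sigma-lognormal model represents a movement as a superposition of strokes whose speed profiles are lognormal in time, with the heading interpolating between a start and end angle. The following minimal sketch is illustrative only: the function names, parameter values, and Euler integration are our own assumptions, and it omits the stroke superposition and DiffVG coupling used in the paper.

```python
import math
import numpy as np

def lognormal_speed(t, t0, mu, sigma, D):
    """Speed profile of one sigma-lognormal stroke (zero for t <= t0).
    D is the stroke amplitude: the profile integrates to D over time."""
    s = np.zeros_like(t)
    valid = t > t0
    tau = t[valid] - t0
    s[valid] = (D / (sigma * math.sqrt(2.0 * math.pi) * tau)
                * np.exp(-(np.log(tau) - mu) ** 2 / (2.0 * sigma ** 2)))
    return s

def stroke_trajectory(t, t0, mu, sigma, D, theta_s, theta_e):
    """2-D trajectory of a single stroke: lognormal speed, with the heading
    interpolated from theta_s to theta_e by the lognormal's CDF."""
    speed = lognormal_speed(t, t0, mu, sigma, D)
    cdf = np.zeros_like(t)
    valid = t > t0
    z = (np.log(t[valid] - t0) - mu) / (sigma * math.sqrt(2.0))
    cdf[valid] = 0.5 * (1.0 + np.array([math.erf(v) for v in z]))
    phi = theta_s + (theta_e - theta_s) * cdf
    vx, vy = speed * np.cos(phi), speed * np.sin(phi)
    dt = t[1] - t[0]  # assumes a uniform time grid
    return np.cumsum(vx) * dt, np.cumsum(vy) * dt
```

Summing several such strokes with staggered onset times produces the smooth, human-like pen trajectories the model is known for; in the paper, the stroke parameters are instead optimised by gradient descent through the differentiable renderer.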


Generating Realistic Arm Movements in Reinforcement Learning: A Quantitative Comparison of Reward Terms and Task Requirements

Charaja, Jhon, Wochner, Isabell, Schumacher, Pierre, Ilg, Winfried, Giese, Martin, Maufroy, Christophe, Bulling, Andreas, Schmitt, Syn, Haeufle, Daniel F. B.

arXiv.org Artificial Intelligence

Mimicking human-like arm movement characteristics involves considering three factors during control policy synthesis: (a) chosen task requirements, (b) inclusion of noise during movement execution and (c) chosen optimality principles. Previous studies showed that when considering these factors (a-c) individually, it is possible to synthesize arm movements that either kinematically match the experimental data or reproduce the stereotypical triphasic muscle activation pattern. However, to date no quantitative comparison has been made of how realistic the arm movements generated by each factor are, nor of whether a partial or full combination of all factors results in arm movements with human-like kinematic characteristics and a triphasic muscle pattern. To investigate this, we used reinforcement learning to learn a control policy for a musculoskeletal arm model, aiming to discern which combination of factors (a-c) results in realistic arm movements according to four frequently reported stereotypical characteristics. Our findings indicate that incorporating velocity and acceleration requirements into the reaching task, employing reward terms that encourage minimization of mechanical work, hand jerk, and control effort, along with the inclusion of noise during movement, leads to the emergence of realistic human arm movements in reinforcement learning. We expect that the gained insights will help in the future to better predict desired arm movements and corrective forces in wearable assistive devices.
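As an illustration of how such reward terms can be combined, here is a hypothetical per-step reward mixing the task requirements (position, velocity and acceleration at the target) with penalties on mechanical work, hand jerk and control effort. The function signature and all weight values are our own placeholder assumptions, not the paper's.

```python
import numpy as np

def arm_reward(hand_pos, target_pos, hand_vel, hand_acc, hand_jerk,
               muscle_act, muscle_force, muscle_vel,
               w_pos=1.0, w_vel=0.1, w_acc=0.01,
               w_work=1e-3, w_jerk=1e-5, w_effort=1e-2):
    """Per-step reward: reach the target with near-zero final velocity and
    acceleration, while penalising mechanical work, hand jerk and control
    effort. All weights are hypothetical placeholders."""
    task = -(w_pos * np.linalg.norm(hand_pos - target_pos) ** 2
             + w_vel * np.linalg.norm(hand_vel) ** 2
             + w_acc * np.linalg.norm(hand_acc) ** 2)
    work = np.sum(np.abs(muscle_force * muscle_vel))  # mechanical work rate
    jerk = np.linalg.norm(hand_jerk) ** 2
    effort = np.sum(muscle_act ** 2)                  # control effort
    return task - w_work * work - w_jerk * jerk - w_effort * effort
```

The relative weighting of these terms is exactly what the paper's quantitative comparison probes: different subsets of the penalties yield movements with different stereotypical characteristics.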


Towards AI-controlled FES-restoration of arm movements: Controlling for progressive muscular fatigue with Gaussian state-space models

Wannawas, Nat, Faisal, A. Aldo

arXiv.org Artificial Intelligence

Reaching disability limits an individual's ability to perform daily tasks. Surface Functional Electrical Stimulation (FES) offers a non-invasive solution to restore the lost abilities. However, inducing desired movements using FES is still an open engineering problem. This problem is accentuated by the complexities of human arms' neuromechanics and the variations across individuals. Reinforcement Learning (RL) emerges as a promising approach to govern customised control rules for different subjects and settings. Yet, one remaining challenge of using RL to control FES is unobservable muscle fatigue that progressively changes as an unknown function of the stimulation, breaking the Markovian assumption of RL. In this work, we present a method to address the unobservable muscle fatigue issue, allowing our RL controller to achieve higher control performance. Our method is based on a Gaussian State-Space Model (GSSM) that utilizes recurrent neural networks to learn Markovian state-spaces from partial observations. The GSSM is used as a filter that converts the observations into the state-space representation for RL, preserving the Markovian assumption. Here, we start by presenting a modification of the original GSSM to address an overconfidence issue. We then present the interaction between RL and the modified GSSM, followed by the setup for FES control learning. We test our RL-GSSM system on a planar reaching setting in simulation using a detailed neuromechanical model and show that the GSSM can help RL maintain its control performance against fatigue.
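The core idea, a recurrent filter that compresses the observation history into an (approximately) Markovian latent state on which the RL controller acts, can be sketched as follows. This toy version uses a plain tanh recurrence with random, untrained weights purely to show the interface; the actual GSSM is a learned probabilistic model, and the class and method names here are our own.

```python
import numpy as np

class RecurrentStateFilter:
    """Minimal sketch of the GSSM idea: a recurrent network summarises the
    history of partial observations into a latent state that approximately
    restores the Markov property for the RL controller. Weights are random
    stand-ins; in the paper they are learned from interaction data."""

    def __init__(self, obs_dim, state_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (state_dim, state_dim))
        self.U = rng.normal(0.0, 0.1, (state_dim, obs_dim))
        self.h = np.zeros(state_dim)

    def reset(self):
        """Clear the latent state at the start of an episode."""
        self.h = np.zeros_like(self.h)
        return self.h

    def step(self, obs):
        """Fold one observation into the state: h_t = tanh(W h_{t-1} + U o_t)."""
        self.h = np.tanh(self.W @ self.h + self.U @ np.asarray(obs))
        return self.h
```

The RL agent then conditions its policy on `filter.step(obs)` rather than on the raw, partially observed measurement, so that hidden slow-varying quantities such as fatigue can be reflected in the latent state.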


Towards AI-controlled FES-restoration of arm movements: neuromechanics-based reinforcement learning for 3-D reaching

Wannawas, Nat, Faisal, A. Aldo

arXiv.org Artificial Intelligence

Reaching disabilities reduce quality of life. Functional Electrical Stimulation (FES) can restore lost motor functions. Yet, there remain challenges in controlling FES to induce desired movements. Neuromechanical models are valuable tools for developing FES control methods. However, among existing models of the upper extremity, several are either overly simplified or too computationally demanding for control purposes. Beyond the model-related issues, finding a general method for governing the control rules for different tasks and subjects remains an engineering challenge. Here, we present our approach toward FES-based restoration of arm movements to address these fundamental issues in controlling FES. Firstly, we present our surface-FES-oriented neuromechanical models of human arms, built using well-accepted, open-source software. The models are designed to capture the significant dynamics in FES control with minimal computational cost. Our models are customisable and can be used for testing different control methods. Secondly, we present the application of reinforcement learning (RL) as a general method for governing the control rules. In combination, our customisable models and RL-based control method open the possibility of delivering customised FES controls for different subjects and settings with minimal engineering intervention. We demonstrate our approach in planar and 3-D settings.


Online Body Schema Adaptation through Cost-Sensitive Active Learning

Cunha, Gonçalo, Vicente, Pedro, Bernardino, Alexandre, Ribeiro, Ricardo, Moreno, Plínio

arXiv.org Artificial Intelligence

Humanoid robots have complex bodies and kinematic chains with several Degrees-of-Freedom (DoF) which are difficult to model. Learning the parameters of a kinematic model can be achieved by observing the position of the robot links during prospective motions and minimising the prediction errors. This work proposes a movement-efficient approach for estimating online the body schema of a humanoid robot arm in the form of Denavit-Hartenberg (DH) parameters. A cost-sensitive active learning approach based on the A-Optimality criterion is used to select optimal joint configurations. The chosen joint configurations simultaneously minimise the error in the estimation of the body schema and the movement between samples. This reduces energy consumption, along with mechanical fatigue and wear, without compromising learning accuracy. The work was implemented in a simulation environment, using the 7-DoF arm of the iCub robot simulator. The hand pose is measured with a single camera via markers placed on the palm and back of the robot's hand. A non-parametric occlusion model is proposed to avoid choosing joint configurations where the markers are not visible, thus preventing worthless attempts. The results show that cost-sensitive active learning achieves accuracy similar to the standard active learning approach while roughly halving the executed movement.
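A cost-sensitive A-optimality selection step can be sketched as follows: for each candidate joint configuration, the trace of the parameter covariance after a simulated measurement update is traded off against the joint-space distance from the current pose. The function, candidate representation, weighting, and noise model here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def select_next_config(candidates, jacobians, cov, q_prev, lam=0.5, noise=1e-2):
    """Cost-sensitive A-optimality sketch: pick the candidate joint
    configuration that best trades off the expected reduction in parameter
    uncertainty (trace of the updated covariance) against the joint-space
    distance travelled from the current pose. `jacobians[i]` is the
    measurement Jacobian w.r.t. the DH parameters at `candidates[i]`."""
    best, best_score = None, np.inf
    for q, J in zip(candidates, jacobians):
        # Information-form update of the parameter covariance for this query.
        info = np.linalg.inv(cov) + J.T @ J / noise
        new_cov = np.linalg.inv(info)
        score = np.trace(new_cov) + lam * np.linalg.norm(np.asarray(q) - q_prev)
        if score < best_score:
            best, best_score = q, score
    return best, best_score
```

Setting `lam = 0` recovers plain A-optimality; increasing it biases the selection toward configurations close to the current pose, which is what cuts the executed movement.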


Future musicians could be trained by AI – By Matthew Griffin Futurist and Keynote Speaker

#artificialintelligence

Created by scientists at Pompeu Fabra University in Spain, the new system was trained using a gesture-recognising Myo armband that tracked the arm movements of a professional violinist as she used the Détaché, Martelé, Spiccato, Ricochet, Sautillé, Staccato and Bariolage bow techniques. Audio of the performances was recorded at the same time. The machine-learning algorithm then compared the arm movements to the corresponding audio, determining which movements created which sounds within each technique. When the system was subsequently tasked with identifying the technique a violinist was using, it could do so with an accuracy of over 94 percent. It is now hoped that, once developed further, the technology could be used to provide students with real-time feedback, showing them where their form deviates from that of the pros. Nor need it be constrained to teaching people how to play the violin: you can imagine it being used to help athletes up their game, among myriad other applications. The research, led by David Dalmazzo and Rafael Ramírez, is described in a paper recently published in the journal Frontiers in Psychology.


Action Anticipation: Reading the Intentions of Humans and Robots

Duarte, Nuno Ferreira, Tasevski, Jovica, Coco, Moreno, Raković, Mirko, Billard, Aude, Santos-Victor, José

arXiv.org Artificial Intelligence

Humans have the fascinating capacity of processing non-verbal visual cues to understand and anticipate the actions of other humans. This "intention reading" ability is underpinned by shared motor repertoires and action models, which we use to interpret the intentions of others as if they were our own. We investigate how different cues contribute to the legibility of human actions during interpersonal interactions. Our first contribution is a publicly available dataset with recordings of human body motion and eye gaze, acquired in an experimental scenario with an actor interacting with three subjects. From these data, we conducted a human study to analyse the importance of the different non-verbal cues for action perception. As our second contribution, we used the motion/gaze recordings to build a computational model describing the interaction between two persons. As a third contribution, we embedded this model in the controller of an iCub humanoid robot and conducted a second human study, in the same scenario with the robot as an actor, to validate the model's "intention reading" capability. Our results show that it is possible to model the (non-verbal) signals exchanged by humans during interaction and to incorporate such a mechanism into robotic systems, with the twin goals of (i) being able to "read" human action intentions and (ii) acting in a way that is legible to humans.


'Prosthesis' 15ft-tall 'anti-robot' exoskeleton to race

Daily Mail - Science & tech

A 15-foot-tall racing exoskeleton that could soon be tearing across the Nevada desert has been presented at the International Consumer Electronics Show (CES) in Las Vegas this year. Its creators say their creation, 'Prosthesis', can hit a top speed of roughly 20 miles per hour (32 km/h), and despite its imposing size it is nearly silent when it moves. They now want to create an 'X1 Mech Racing League' where mechanical exoskeletons go head-to-head. The 8,000lb (3,600kg) 'anti-robot' is controlled by a human pilot who stands at the centre of the mechanical exoskeleton, using arm movements to drive it forward at terrifying speeds.


Scientists reveal the best and worst dance moves for women

Daily Mail - Science & tech

It's a question that has plagued humanity through the ages: What moves make all the difference when you hit the dance floor? Researchers have found that the sexiest dancers move their hips a lot, and shift their arms and thighs independently of one another. Scientists suggest that there could be evolutionary benefits to the dance moves we enjoy most. For example, they suggest that hip swing may be an explicitly feminine trait that shows fertility and youth. The ability to move limbs independently of one another could reveal good motor control, and hence healthy genes for procreation.